
    Mental Rotation of Digitally-Rendered Haptic Objects by the Visually-Impaired.

    In the event of visual impairment or blindness, information from other intact senses can be used as a substitute to retrain (and in extremis replace) visual functions. Abilities including reading, mental representation of objects and spatial navigation can be performed using tactile information. Current technologies convey only a restricted library of stimuli, either because they depend on real objects or because they rely on low-resolution renderings. Digital haptic technologies can overcome such limitations. The applicability of this technology was previously demonstrated in sighted participants. Here, we reasoned that visually-impaired and blind participants can create mental representations of letters presented haptically in normal and mirror-reversed form without the use of any visual information, and can mentally manipulate such representations. Visually-impaired and blind volunteers were blindfolded and trained on the haptic tablet with two letters (either L and P or F and G). During testing, on each trial they haptically explored one of the four letters, presented at 0°, 90°, 180°, or 270° rotation from upright, and indicated whether the letter was in normal or mirror-reversed form. Rotation angle impacted performance; greater deviation from 0° resulted in greater impairment for trained and untrained normal letters, consistent with mental rotation of these haptically-rendered objects. Performance was also generally less accurate with mirror-reversed stimuli, an effect that was not modulated by rotation angle. Our findings demonstrate, for the first time, the suitability of a digital haptic technology for the blind and visually-impaired. Classic devices remain limited in their accessibility and in the flexibility of their applications. We show that mental representations can be generated and manipulated using digital haptic technology. This technology may thus offer an innovative solution for mitigating impairments in the visually-impaired and for training skills that depend on mental representations and their spatial manipulation.
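
    The rotation effect reported above is conventionally quantified as a monotonic decline in discrimination performance with angular deviation from upright (270° being equivalent to 90° of rotation). A minimal sketch of such an analysis, using simulated placeholder trials rather than the study's actual data:

```python
# Hedged sketch: accuracy as a function of angular deviation from upright.
# All trial data below are simulated placeholders, not the study's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rotations = np.array([0, 90, 180, 270] * 25)            # presented rotations (deg)
deviation = np.minimum(rotations, 360 - rotations)      # 270 deg folds onto 90 deg
correct = rng.random(rotations.size) < (0.95 - 0.002 * deviation)  # simulated responses

# Mean accuracy per angular deviation, then a simple linear trend test.
devs = np.unique(deviation)
acc = np.array([correct[deviation == d].mean() for d in devs])
slope, intercept, r, p, se = stats.linregress(devs, acc)
for d, a in zip(devs, acc):
    print(f"{d:3d} deg from upright: accuracy = {a:.2f}")
print(f"trend: {slope * 100:.2f} change in accuracy per 100 deg (p = {p:.3f})")
```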

    Brain mechanisms for perceiving illusory lines in humans.

    Illusory contours (ICs) are perceptions of visual borders despite absent contrast gradients. The psychophysical and neurobiological mechanisms of IC processing have been studied across species and with diverse brain imaging/mapping techniques. Nonetheless, debate continues regarding whether IC sensitivity results from a (presumably) feedforward process within low-level visual cortices (V1/V2) or is instead first processed within higher-order brain regions, such as lateral occipital cortices (LOC). Studies in animal models, which generally favour a feedforward mechanism within V1/V2, have typically involved stimuli inducing IC lines. By contrast, studies in humans generally favour a mechanism where IC sensitivity is mediated by the LOC and have typically involved stimuli inducing IC forms or shapes. Thus, the particular stimulus features used may strongly contribute to which model of IC sensitivity is supported. To address this, we recorded visual evoked potentials (VEPs) while presenting human observers with an array of 10 inducers within the central 5°, two of which could be oriented to induce an IC line on a given trial. VEPs were analysed using an electrical neuroimaging framework. Sensitivity to the presence vs. absence of centrally-presented IC lines was first apparent at ∼200 ms post-stimulus onset and was evident as topographic differences across conditions. We also localized these differences to the LOC. The timing and localization of these effects are consistent with a model of IC sensitivity commencing within higher-level visual cortices. We propose that prior observations of effects within lower-tier cortices (V1/V2) result from feedback from IC sensitivity that originates within higher-tier cortices (LOC).
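
    The topographic differences reported here are the kind of effect that electrical neuroimaging frameworks quantify with global field power (GFP) and global map dissimilarity (DISS) between strength-normalized scalp maps. A hedged sketch of both measures, with random arrays standing in for the condition-averaged VEP topographies:

```python
# Hedged sketch of two standard electrical-neuroimaging measures:
# global field power (GFP) and global map dissimilarity (DISS).
# The maps below are random placeholders for condition-averaged VEPs.
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation across electrodes."""
    return np.sqrt(np.mean((v - v.mean()) ** 2))

def diss(v1, v2):
    """Dissimilarity between average-referenced, GFP-normalized maps
    (0 = identical topographies, 2 = perfectly inverted)."""
    u1 = (v1 - v1.mean()) / gfp(v1)
    u2 = (v2 - v2.mean()) / gfp(v2)
    return np.sqrt(np.mean((u1 - u2) ** 2))

rng = np.random.default_rng(1)
map_ic = rng.normal(size=64)                      # e.g. IC-present map at ~200 ms
map_nc = map_ic + rng.normal(scale=0.5, size=64)  # IC-absent counterpart
print(f"GFP = {gfp(map_ic):.3f}, DISS = {diss(map_ic, map_nc):.3f}")
```

    In practice, the reliability of such DISS values is assessed with randomization statistics that permute condition labels (so-called TANOVA) rather than with parametric tests.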

    Sounds enhance visual completion processes.

    Everyday vision includes the detection of stimuli, figure-ground segregation, as well as object localization and recognition. Such processes must often surmount impoverished or noisy conditions; borders are perceived despite occlusion or absent contrast gradients. These illusory contours (ICs) are an example of so-called mid-level vision, with an event-related potential (ERP) correlate at ∼100-150 ms post-stimulus onset originating within lateral-occipital cortices (the IC effect). To date, the visual completion processes supporting IC perception have been considered exclusively visual; any influence from other sensory modalities is unknown. It is now well-established that multisensory processes can influence both low-level vision (e.g. detection) and higher-level object recognition. By contrast, it is unknown whether mid-level vision exhibits multisensory benefits and, if so, through what mechanisms. We hypothesized that sounds would impact the IC effect. We recorded 128-channel ERPs from 17 healthy, sighted participants who viewed ICs or no-contour (NC) counterparts either in the presence or absence of task-irrelevant sounds. The IC effect was enhanced by sounds and resulted in the recruitment of a distinct configuration of active brain areas over the 70-170 ms post-stimulus period. IC-related source-level activity within the lateral occipital cortex (LOC), inferior parietal lobe (IPL), as well as primary visual cortex (V1) was enhanced by sounds. Moreover, the activity in these regions was correlated when sounds were present, but not when absent. Results from a control experiment, which employed amodal variants of the stimuli, suggested that sounds impact the perceived brightness of the IC rather than shape formation per se. We provide the first demonstration that multisensory processes augment mid-level vision and everyday visual completion processes, and that one of the mechanisms is brightness enhancement. These results have important implications for the design of treatments and/or visual aids for low-vision patients.
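
    The IC effect itself is a difference-wave measure: the ERP to IC stimuli minus the ERP to NC counterparts, summarized over a time window of interest. A minimal sketch with synthetic arrays; the 70-170 ms window and 128 channels follow the abstract, everything else is a placeholder:

```python
# Hedged sketch: an "IC effect" as the IC-minus-NC difference wave averaged
# over a post-stimulus window. ERP arrays are synthetic placeholders with
# shape (trials, electrodes, samples).
import numpy as np

fs = 512                                    # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)        # epoch: -100 to +500 ms
rng = np.random.default_rng(2)
erp_ic = rng.normal(size=(100, 128, times.size))   # IC trials
erp_nc = rng.normal(size=(100, 128, times.size))   # no-contour trials

win = (times >= 0.070) & (times <= 0.170)   # 70-170 ms window
diff_wave = erp_ic.mean(axis=0) - erp_nc.mean(axis=0)   # (electrodes, samples)
ic_effect = diff_wave[:, win].mean(axis=1)              # per-electrode effect
print("largest IC effect at electrode index", int(np.abs(ic_effect).argmax()))
```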

    Mental Rotation of Digitally-Rendered Haptic Objects.

    Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes, creating the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information, and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
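
    The "typical effect of rotation angle" referred to here is classically a roughly linear increase in response time with angular deviation from upright, whose inverse slope can be read as a mental rotation rate. A hedged sketch with hypothetical mean RTs, not the study's measurements:

```python
# Hedged sketch: the classic mental-rotation signature of response time (RT)
# rising linearly with angular deviation. The RT values are illustrative only.
import numpy as np
from scipy import stats

deviation = np.array([0, 90, 180])        # deg from upright (270 folds onto 90)
mean_rt = np.array([1.10, 1.45, 1.82])    # hypothetical mean RTs (s)

slope, intercept, r, p, se = stats.linregress(deviation, mean_rt)
print(f"RT = {intercept:.2f} s + {slope * 1000:.1f} ms/deg x angle (r = {r:.2f})")
print(f"implied rotation rate: ~{1 / slope:.0f} deg/s")
```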

    Auditory Enhancement of Illusory Contour Perception.

    Illusory contours (ICs) are borders that are perceived in the absence of contrast gradients. Until recently, IC processes were considered exclusively visual in nature and presumed to be unaffected by information from other senses. Electrophysiological data in humans indicate that sounds can enhance IC processes. Despite cross-modal enhancement being observed at the neurophysiological level, to date there has been no evidence of direct amplification of behavioural performance in IC processing by sounds. We addressed this knowledge gap. Healthy adults (n = 15) discriminated instances when inducers were arranged to form an IC from instances when no IC was formed (NC). Inducers were low-contrast and masked, and there was continuous background acoustic noise throughout a block of trials. On half of the trials, i.e. independently of IC vs. NC, a 1000-Hz tone was presented synchronously with the inducer stimuli. Sound presence improved the accuracy of indicating when an IC was presented, but had no impact on performance with NC stimuli (significant IC presence/absence × Sound presence/absence interaction). There was no evidence that this was due to general alerting or to a speed-accuracy trade-off (no main effect of sound presence on accuracy rates and no comparable significant interaction on reaction times). Moreover, sound presence increased sensitivity and reduced bias on the IC vs. NC discrimination task. These results demonstrate that multisensory processes augment mid-level visual functions, exemplified by IC processes. Aside from their impact on neurobiological and computational models of vision, our findings may prove clinically beneficial for low-vision or sight-restored patients.
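
    The sensitivity and bias measures referred to here are, in standard signal detection terms, d' and the criterion, derived from hit and false-alarm rates in the IC (signal) vs. NC (noise) discrimination. A sketch with illustrative counts rather than the study's data:

```python
# Hedged sketch: d' (sensitivity) and criterion (bias) from trial counts,
# with a log-linear correction so that rates of 0 or 1 do not yield
# infinite z-scores. All counts below are illustrative placeholders.
from scipy.stats import norm

def dprime_criterion(hits, misses, fas, crs):
    hr = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)       # corrected false-alarm rate
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    return z_hr - z_far, -0.5 * (z_hr + z_far)

# Hypothetical counts (hits, misses, false alarms, correct rejections).
conditions = {"no sound": (60, 40, 30, 70), "sound": (75, 25, 28, 72)}
for label, counts in conditions.items():
    d, c = dprime_criterion(*counts)
    print(f"{label:8s}: d' = {d:.2f}, criterion = {c:.2f}")
```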

    Brain and Cognitive Mechanisms of Top-Down Attentional Control in a Multisensory World: Benefits of Electrical Neuroimaging.

    In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top-down audiovisual object representations ("attentional templates") while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via "the N2pc component": spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150-300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030-1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and thus cognitive, mechanisms underlying top-down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pc measurements. In the N2pc time window (170-270 msec), color cues elicited brain responses that differed in both their strength and their topography, the latter finding being indicative of changes in the configuration of active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top-down object representations, and not only by separate sensory-specific top-down feature templates (as suggested by traditional N2pc analyses). We discuss how the electrical neuroimaging approach can aid research on top-down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
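
    The N2pc quantified above is conventionally computed as the contralateral-minus-ipsilateral voltage difference at posterior electrode pairs (e.g. PO7/PO8), averaged over roughly the 170-270 msec window the abstract cites. A minimal sketch on synthetic trial averages:

```python
# Hedged sketch: canonical N2pc = contralateral minus ipsilateral voltage at a
# posterior electrode pair. All waveforms are synthetic placeholders.
import numpy as np

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(3)
# Trial-averaged ERPs at PO7 (left) and PO8 (right) for cue-left / cue-right trials.
po7_cue_left, po8_cue_left = rng.normal(size=times.size), rng.normal(size=times.size)
po7_cue_right, po8_cue_right = rng.normal(size=times.size), rng.normal(size=times.size)

contra = (po8_cue_left + po7_cue_right) / 2   # hemisphere opposite the cued side
ipsi = (po7_cue_left + po8_cue_right) / 2     # hemisphere on the cued side
win = (times >= 0.170) & (times <= 0.270)
n2pc = (contra - ipsi)[win].mean()
print(f"N2pc amplitude, 170-270 ms: {n2pc:.3f} (arbitrary units)")
```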

    Towards understanding how we pay attention in naturalistic visual search settings.

    Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both of which likely influence attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via the distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent), and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven modulations of brain responses occurred long before the N2pc time window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and that they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals and stimuli's perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the contextual and multisensory control of attention.
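
    The reaction-time spatial cueing measure used here indexes attentional capture as the RT cost incurred when the distractor appears away from, rather than at, the location of the subsequent target. A hedged sketch with simulated RTs (all values are placeholders):

```python
# Hedged sketch: the cueing (capture) effect in a Folk et al.-style paradigm,
# RT(distractor at a different location) minus RT(distractor at the target
# location). All RTs below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
rt_same = rng.normal(0.52, 0.05, size=200)   # distractor cued the target location
rt_diff = rng.normal(0.56, 0.05, size=200)   # distractor cued another location

capture = rt_diff.mean() - rt_same.mean()
print(f"spatial cueing (capture) effect: {capture * 1000:.0f} ms")
```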